    Markov semigroups with hypocoercive-type generator in Infinite Dimensions: Ergodicity and Smoothing

    We start by considering finite dimensional Markovian dynamics in R^m generated by operators of hypocoercive type, and for such models we obtain short- and long-time pointwise estimates for all the derivatives, of any order and in any direction, along the semigroup. We then look at infinite dimensional models (in (R^m)^{Z^d}) produced by the interaction of infinitely many finite dimensional dissipative dynamics of the type indicated above. For these infinite dimensional models we study finite speed of propagation of information, well-posedness of the semigroup, the time behaviour of the derivatives and the strong ergodicity problem.
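
    A prototypical finite dimensional dynamics of hypocoercive type is the kinetic (underdamped) Langevin equation, in which noise acts only on the velocity variable yet the process still relaxes to equilibrium. The following minimal simulation sketch (an illustration of the setting, not taken from the paper; all parameter choices are arbitrary) shows this relaxation numerically.

```python
import numpy as np

# Kinetic (underdamped) Langevin dynamics in R^2, variables (q, p):
#   dq = p dt,   dp = (-q - gamma p) dt + sqrt(2 gamma) dW.
# Noise acts only on p, so the generator is degenerate, yet the dynamics is
# hypocoercive and relaxes to the Gaussian equilibrium exp(-(q^2 + p^2)/2).

def simulate(n_paths=10_000, gamma=1.0, dt=1e-3, T=20.0, seed=0):
    rng = np.random.default_rng(seed)
    q = np.full(n_paths, 3.0)   # start far from equilibrium
    p = np.full(n_paths, -2.0)
    for _ in range(int(T / dt)):
        noise = rng.standard_normal(n_paths)
        q, p = q + p * dt, p + (-q - gamma * p) * dt + np.sqrt(2 * gamma * dt) * noise
    return q, p

q, p = simulate()
# Crude ergodicity check: empirical means close to 0 and variances close to 1.
print(q.mean(), p.mean(), q.var(), p.var())
```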

    Uniform in time estimates for the weak error of the Euler method for SDEs and a Pathwise Approach to Derivative Estimates for Diffusion Semigroups

    We present a criterion for uniform in time convergence of the weak error of the Euler scheme for stochastic differential equations (SDEs). The criterion requires i) exponential decay in time of the space-derivatives of the semigroup associated with the SDE and ii) bounds on (some) moments of the Euler approximation. We show by means of examples (and counterexamples) how both i) and ii) are needed to obtain the desired result. If the weak error converges to zero uniformly in time, then convergence of ergodic averages follows as well. We also show that Lyapunov-type conditions are neither sufficient nor necessary for the weak error of the Euler approximation to converge uniformly in time, and we clarify the relations between the validity of Lyapunov conditions, i) and ii). Conditions for ii) to hold have been studied in the literature; here we produce sufficient conditions for i) to hold. The study of derivative estimates has attracted a lot of attention; however, few results guarantee exponentially fast decay of the derivatives. Exponential decay of derivatives typically follows from coercive-type conditions involving the vector fields appearing in the equation and their commutators; here we focus on the case in which such coercive-type conditions are non-uniform in space. To the best of our knowledge, this situation is unexplored in the literature, at least on a systematic level. To obtain results under such space-inhomogeneous conditions we initiate a pathwise approach to the study of derivative estimates for diffusion semigroups and combine this pathwise method with the use of Large Deviation Principles.
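
    As a toy illustration of the kind of quantity involved (not taken from the paper), the sketch below estimates the weak error of the Euler-Maruyama scheme at several time horizons for a one-dimensional Ornstein-Uhlenbeck process, whose mean is known in closed form; for this contractive example the error does not grow with the horizon. Monte Carlo noise also enters, so the printed numbers mix discretization bias and sampling error.

```python
import numpy as np

# Toy weak-error check for Euler-Maruyama on the Ornstein-Uhlenbeck SDE
#   dX = -X dt + dW,  X_0 = x0,  with test function f(x) = x,
# for which the exact value is E[f(X_T)] = x0 * exp(-T).

def euler_mean(x0, T, dt=0.05, n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    for _ in range(int(T / dt)):
        x += -x * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    return x.mean()

x0 = 2.0
for T in (1.0, 5.0, 20.0, 50.0):
    weak_error = abs(euler_mean(x0, T) - x0 * np.exp(-T))
    print(f"T = {T:5.1f}   weak error ~ {weak_error:.4f}")
```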

    Some remarks on degenerate hypoelliptic Ornstein-Uhlenbeck operators

    We study degenerate hypoelliptic Ornstein-Uhlenbeck operators in L^2 spaces with respect to invariant measures. The purpose of this article is to show how recent results on general quadratic operators apply to the study of degenerate hypoelliptic Ornstein-Uhlenbeck operators. We first show that some known results about the spectral and subelliptic properties of Ornstein-Uhlenbeck operators may be directly recovered from the general analysis of quadratic operators with zero singular spaces. We also provide new resolvent estimates for hypoelliptic Ornstein-Uhlenbeck operators. We show in particular that the spectrum of these non-selfadjoint operators may be very unstable under small perturbations and that their resolvents can blow up in norm far away from their spectra. Furthermore, we establish sharp resolvent estimates in specific regions of the resolvent set which enable us to prove exponential return to equilibrium.
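
    For an Ornstein-Uhlenbeck operator of the form (1/2) Tr(Q grad^2) + <Bx, grad> with degenerate diffusion matrix Q, hypoellipticity is equivalent to the Kalman rank condition on the pair (B, Q^{1/2}). A small numerical check of that condition, on a kinetic Fokker-Planck type example chosen here for illustration rather than taken from the article, might look like:

```python
import numpy as np

def kalman_rank(B, Q_sqrt):
    """Rank of the controllability matrix [Q^{1/2}, B Q^{1/2}, ..., B^{n-1} Q^{1/2}].

    The Ornstein-Uhlenbeck operator (1/2) Tr(Q grad^2) + <B x, grad> is
    hypoelliptic iff this rank equals the space dimension n.
    """
    n = B.shape[0]
    blocks = [np.linalg.matrix_power(B, k) @ Q_sqrt for k in range(n)]
    return np.linalg.matrix_rank(np.hstack(blocks))

# Kinetic Fokker-Planck type example in R^2: noise acts only on the second
# (velocity) variable, but transport by B spreads it to the first, so the rank is full.
B = np.array([[0.0, 1.0],
              [-1.0, -1.0]])
Q_sqrt = np.array([[0.0, 0.0],
                   [0.0, 1.0]])
print(kalman_rank(B, Q_sqrt))  # -> 2: hypoelliptic despite the degenerate Q
```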

    Long-time behaviour of degenerate diffusions: UFG-type SDEs and time-inhomogeneous hypoelliptic processes

    We study the long time behaviour of a large class of diffusion processes on R^N, generated by second order differential operators of (possibly) degenerate type. The operators that we consider need not satisfy the Hörmander condition. Instead, they satisfy the so-called UFG condition, introduced by Herman, Lobry and Sussman in the context of geometric control theory and later by Kusuoka and Stroock, this time with probabilistic motivations. In this paper we study UFG diffusions and demonstrate the importance of such a class of processes in several respects: roughly speaking, i) we show that UFG processes constitute a family of SDEs which exhibit multiple invariant measures and for which one is able to describe a systematic procedure to determine the basin of attraction of each invariant measure (equilibrium state); ii) we use an explicit change of coordinates to prove that every UFG diffusion can be, at least locally, represented as a system consisting of an SDE coupled with an ODE, where the ODE evolves independently of the SDE part of the dynamics; iii) as a result, UFG diffusions are inherently "less smooth" than hypoelliptic SDEs; more precisely, we prove that UFG processes do not admit a density with respect to Lebesgue measure on the entire space, but only on suitable time-evolving submanifolds, which we describe; iv) we show that our results and techniques, which we devised for UFG processes, can be applied to the study of the long-time behaviour of non-autonomous hypoelliptic SDEs and therefore produce several results on this latter class of processes as well; v) because processes that satisfy the (uniform) parabolic Hörmander condition are UFG processes, our paper contains a wealth of results about the long time behaviour of (uniformly) hypoelliptic processes which are non-ergodic, in the sense that they exhibit multiple invariant measures.
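
    Both the Hörmander condition and the UFG condition are formulated in terms of iterated Lie brackets of the vector fields defining the SDE. A minimal symbolic sketch for computing such brackets (purely illustrative, not the machinery used in the paper) is:

```python
import sympy as sp

x, y = sp.symbols("x y")
coords = [x, y]

def lie_bracket(V, W, coords):
    """Lie bracket [V, W] = (DW) V - (DV) W for vector fields given as component lists."""
    JV = sp.Matrix([[sp.diff(v, c) for c in coords] for v in V])
    JW = sp.Matrix([[sp.diff(w, c) for c in coords] for w in W])
    return list(JW * sp.Matrix(V) - JV * sp.Matrix(W))

# Degenerate example: the diffusion field V1 only acts in the x-direction,
# but its bracket with the drift V0 produces a direction in y.
V0 = [y, -x]   # drift vector field
V1 = [1, 0]    # diffusion vector field (no noise in y)
print(lie_bracket(V1, V0, coords))  # -> [0, -1], which together with V1 spans R^2
```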

    Non-stationary phase of the MALA algorithm

    The Metropolis-Adjusted Langevin Algorithm (MALA) is a Markov Chain Monte Carlo method which creates a Markov chain reversible with respect to a given target distribution, π^N, with Lebesgue density on R^N; it can hence be used to approximately sample from the target distribution. When the dimension N is large, a key question is to determine the computational cost of the algorithm as a function of N. One approach to this question, which we adopt here, is to derive diffusion limits for the algorithm. The family of target measures that we consider in this paper are, in general, in non-product form and are of interest in applied problems as they arise in Bayesian nonparametric statistics and in the study of conditioned diffusions. Furthermore, we study the situation, which arises in practice, where the algorithm is started out of stationarity. We thereby significantly extend previous works, which consider either only measures of product form, when the Markov chain is started out of stationarity, or measures defined via a density with respect to a Gaussian, when the Markov chain is started in stationarity. We prove that, in the non-stationary regime, the computational cost of the algorithm is of the order N^(1/2) with dimension, as opposed to what is known to happen in the stationary regime, where the cost is of the order N^(1/3).
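
    For reference, one MALA step for a finite dimensional target with log-density log π takes the form below; this is the generic algorithm on R^N, not the infinite dimensional setting analysed in the paper, and the step size in the toy usage follows the stationary-regime N^(-1/3) scaling mentioned above.

```python
import numpy as np

def mala_step(x, log_pi, grad_log_pi, h, rng):
    """One Metropolis-Adjusted Langevin step with step size h (generic sketch)."""
    # Langevin proposal: one Euler step of dX = grad log pi(X) dt + sqrt(2) dW
    y = x + h * grad_log_pi(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)

    def log_q(b, a):  # log density of proposing b from a (up to a constant)
        return -np.sum((b - a - h * grad_log_pi(a)) ** 2) / (4 * h)

    log_alpha = log_pi(y) - log_pi(x) + log_q(x, y) - log_q(y, x)
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False

# Toy usage: standard Gaussian target on R^N, step size scaled like N^(-1/3).
N, rng = 100, np.random.default_rng(0)
log_pi, grad_log_pi = lambda z: -0.5 * np.sum(z ** 2), lambda z: -z
x, h, accepted = rng.standard_normal(N), 0.5 * N ** (-1.0 / 3.0), 0
for _ in range(5_000):
    x, acc = mala_step(x, log_pi, grad_log_pi, h, rng)
    accepted += acc
print("acceptance rate:", accepted / 5_000)
```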

    A Function Space HMC Algorithm With Second Order Langevin Diffusion Limit

    We describe a new MCMC method optimized for the sampling of probability measures on Hilbert space which have a density with respect to a Gaussian; such measures arise in the Bayesian approach to inverse problems, and in conditioned diffusions. Our algorithm is based on two key design principles: (i) algorithms which are well-defined in infinite dimensions result in methods which do not suffer from the curse of dimensionality when they are applied to approximations of the infinite dimensional target measure on R^N; (ii) non-reversible algorithms can have better mixing properties compared to their reversible counterparts. The method we introduce is based on the hybrid Monte Carlo algorithm, tailored to incorporate these two design principles. The main result of this paper states that the new algorithm, appropriately rescaled, converges weakly to a second order Langevin diffusion on Hilbert space; as a consequence, the algorithm explores the approximate target measures on R^N in a number of steps which is independent of N. We also present the underlying theory for the limiting non-reversible diffusion on Hilbert space, including characterization of the invariant measure, and we describe numerical simulations demonstrating that the proposed method has favourable mixing properties as an MCMC algorithm.
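
    For orientation, a single step of the standard (finite dimensional) hybrid Monte Carlo algorithm, on which the method builds, can be sketched as follows; this is the plain R^N version with identity mass matrix, not the function-space algorithm introduced in the paper.

```python
import numpy as np

def hmc_step(x, log_pi, grad_log_pi, eps, n_leapfrog, rng):
    """One standard HMC step with identity mass matrix (generic finite dimensional sketch)."""
    p = rng.standard_normal(x.shape)            # resample the momentum
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog integration of the Hamiltonian H(x, p) = -log pi(x) + |p|^2 / 2
    p_new += 0.5 * eps * grad_log_pi(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new
        p_new += eps * grad_log_pi(x_new)
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_log_pi(x_new)
    # Accept or reject on the change in Hamiltonian
    dH = (-log_pi(x_new) + 0.5 * np.sum(p_new ** 2)) - (-log_pi(x) + 0.5 * np.sum(p ** 2))
    return (x_new, True) if np.log(rng.uniform()) < -dH else (x, False)

# Toy usage on a standard Gaussian target in R^50.
rng = np.random.default_rng(1)
x = rng.standard_normal(50)
for _ in range(1_000):
    x, _ = hmc_step(x, lambda z: -0.5 * np.sum(z ** 2), lambda z: -z, 0.1, 10, rng)
```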

    Reversible and non-reversible Markov Chain Monte Carlo algorithms for reservoir simulation problems

    We compare numerically the performance of reversible and non-reversible Markov Chain Monte Carlo algorithms for high dimensional oil reservoir problems; because of the nature of the problem at hand, the target measures from which we sample are supported on bounded domains. We compare two strategies to deal with bounded domains, namely reflecting proposals off the boundary and rejecting them when they fall outside of the domain. We observe that for complex high dimensional problems reflection mechanisms outperform rejection approaches, and that the advantage of introducing non-reversibility in the Markov chain employed for sampling becomes increasingly visible as the dimension of the parameter space increases.
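
    The two boundary strategies can be sketched for a random walk proposal on the box [0, 1]^N as below (purely illustrative; the reservoir problems considered in the paper are far more involved, and in either case the move is still passed through the usual Metropolis accept/reject step for the target):

```python
import numpy as np

def propose_reject(x, step, rng):
    """Random walk move on [0, 1]^N; proposals falling outside the box are discarded."""
    y = x + step * rng.standard_normal(x.shape)
    return y if np.all((y >= 0.0) & (y <= 1.0)) else x

def propose_reflect(x, step, rng):
    """Random walk move on [0, 1]^N; components leaving the box are reflected back inside.

    Assumes the step size is small relative to the box, so a single reflection suffices.
    """
    y = x + step * rng.standard_normal(x.shape)
    y = np.where(y < 0.0, -y, y)        # reflect off the lower boundary
    y = np.where(y > 1.0, 2.0 - y, y)   # reflect off the upper boundary
    return y
```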

    Diffusion Limit for the Random Walk Metropolis Algorithm out of stationarity

    The Random Walk Metropolis (RWM) algorithm is a Metropolis–Hastings Markov Chain Monte Carlo algorithm designed to sample from a given target distribution π^N with Lebesgue density on R^N. Like any other Metropolis–Hastings algorithm, RWM constructs a Markov chain by randomly proposing a new position (the "proposal move"), which is then accepted or rejected according to a rule which makes the chain reversible with respect to π^N. When the dimension N is large, a key question is to determine the optimal scaling with N of the proposal variance: if the proposal variance is too large, the algorithm will reject the proposed moves too often; if it is too small, the algorithm will explore the state space too slowly. Determining the optimal scaling of the proposal variance gives a measure of the cost of the algorithm as well. One approach to tackle this issue, which we adopt here, is to derive diffusion limits for the algorithm. Such an approach has been proposed in the seminal papers (Ann. Appl. Probab. 7 (1) (1997) 110–120; J. R. Stat. Soc. Ser. B Stat. Methodol. 60 (1) (1998) 255–268). In particular, in (Ann. Appl. Probab. 7 (1) (1997) 110–120) the authors derive a diffusion limit for the RWM algorithm under the two following assumptions: (i) the algorithm is started in stationarity; (ii) the target measure π^N is in product form. The present paper considers the situation of practical interest in which both assumptions (i) and (ii) are removed. That is, (a) we study the case (which occurs in practice) in which the algorithm is started out of stationarity, and (b) we consider target measures which are in non-product form. Roughly speaking, we consider target measures that admit a density with respect to a Gaussian; such measures arise in Bayesian nonparametric statistics and in the study of conditioned diffusions. We prove that, out of stationarity, the optimal scaling for the proposal variance is O(N^(−1)), as it is in stationarity. In this optimal scaling, a diffusion limit is obtained and the cost of reaching and exploring the invariant measure scales as O(N). Notice that the optimal scalings in and out of stationarity need not be the same in general, and indeed they differ, e.g., in the case of the MALA algorithm (Stoch. Partial Differ. Equ. Anal. Comput. 6 (3) (2018) 446–499). More importantly, our diffusion limit is given by a stochastic PDE coupled to a scalar ordinary differential equation; such an ODE gives a measure of how far from stationarity the process is and can therefore be taken as an indicator of convergence. In this sense, this paper contributes understanding to the long-standing problem of monitoring convergence of MCMC algorithms.
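
    In concrete terms, the scaling statement concerns a random walk proposal whose variance shrinks like N^(-1); the toy sketch below (standard Gaussian target, not the non-product measures studied in the paper) shows that with this scaling the acceptance rate remains of order one as N grows.

```python
import numpy as np

def rwm_acceptance(N, n_steps=20_000, ell=1.0, seed=0):
    """Random Walk Metropolis on a standard Gaussian target on R^N with
    proposal variance ell^2 / N, i.e. the O(N^(-1)) scaling."""
    rng = np.random.default_rng(seed)
    log_pi = lambda z: -0.5 * np.sum(z ** 2)
    x = np.full(N, 2.0)                 # started out of stationarity
    accepted = 0
    for _ in range(n_steps):
        y = x + (ell / np.sqrt(N)) * rng.standard_normal(N)
        if np.log(rng.uniform()) < log_pi(y) - log_pi(x):
            x, accepted = y, accepted + 1
    return accepted / n_steps

for N in (10, 100, 1000):
    print(N, rwm_acceptance(N))
```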